Search Results: All records (2 resources)
- Authors / Contributors: Aloimonos, Yiannis; Burner, Levi; Cai, Haoming; Chen, Jingxi; Feng, Brandon; Fermüller, Cornelia; Islam, Md Jahidul; Metzler, Christopher A.; Siddique, Md Abu Bakr; Wang, Tianfu; Wu, Jiayi; Yuan, Dehao
Video Frame Interpolation (VFI) aims to recover realistic missing frames between observed frames, generating a high-frame-rate video from a low-frame-rate one. Without additional guidance, however, the large motion between frames makes this problem ill-posed. Event-based Video Frame Interpolation (EVFI) addresses this challenge by using sparse, high-temporal-resolution event measurements as motion guidance, which allows EVFI methods to significantly outperform frame-only methods. To date, however, EVFI methods have relied on a limited set of paired event-frame training data, severely limiting their performance and generalization capabilities. In this work, we overcome the limited-data challenge by adapting pre-trained video diffusion models, trained on internet-scale datasets, to EVFI. We experimentally validate our approach on real-world EVFI datasets, including a new one that we introduce. Our method outperforms existing methods and generalizes across cameras far better than existing approaches.
Free, publicly accessible full text available June 21, 2026.
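To see why frame-only interpolation is ill-posed under large motion, consider the simplest possible baseline: linearly blending the two observed frames. This toy sketch (a hypothetical illustration, not the paper's method) shows the characteristic ghosting artifact, since no motion information is available:

```python
import numpy as np

def linear_interpolate(frame0, frame1, t):
    """Naive frame interpolation: blend two frames by time t in [0, 1].
    With large motion this produces ghosting, illustrating why
    frame-only interpolation needs motion guidance (e.g. events)."""
    return (1.0 - t) * frame0 + t * frame1

# A bright square moving 8 pixels to the right between two frames.
f0 = np.zeros((16, 16)); f0[4:8, 2:6] = 1.0
f1 = np.zeros((16, 16)); f1[4:8, 10:14] = 1.0

mid = linear_interpolate(f0, f1, 0.5)
# Blending yields two half-intensity "ghost" squares at the start and
# end positions, while the true midpoint position stays dark: the
# motion information is simply missing from the two frames alone.
```

Event cameras fill exactly this gap, since their asynchronous brightness-change measurements trace the object's path between the two frames.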
-
Wu, Jiayi; Wang, Tianfu; Siddique, Md Abu Bakr; Islam, Md Jahidul; Fermüller, Cornelia; Aloimonos, Yiannis; Metzler, Christopher A. (IEEE Transactions on Pattern Analysis and Machine Intelligence)
Underwater image restoration aims to recover color, contrast, and appearance in underwater scenes, which is crucial for fields like marine ecology and archaeology. While pixel-domain diffusion methods work for simple scenes, they are computationally heavy and produce artifacts in complex, depth-varying scenes. We present a single-step latent diffusion method, SLURPP (Single-step Latent Underwater Restoration with Pretrained Priors), that overcomes these limitations by combining a novel network architecture with an accurate synthetic data generation pipeline. SLURPP combines pretrained latent diffusion models, which encode strong priors on the geometry and depth of scenes, with an explicit scene decomposition that allows one to model and account for the effects of light attenuation and backscattering. To train SLURPP, we design a physics-based underwater image synthesis pipeline that applies varied and realistic underwater degradation effects to existing terrestrial image datasets. We evaluate our method extensively on both synthetic and real-world benchmarks and demonstrate state-of-the-art performance.
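The abstract does not spell out the synthesis pipeline, but a minimal sketch of a commonly used underwater image-formation model (per-channel attenuation plus backscatter) conveys the idea. All coefficients below are assumed illustrative values, not the paper's:

```python
import numpy as np

def underwater_degrade(clean, depth, beta, backscatter):
    """Apply a simplified underwater image-formation model per channel:
        I_c = J_c * exp(-beta_c * z) + B_c * (1 - exp(-beta_c * z))
    where J is the clean (terrestrial) image, z the scene depth,
    beta_c a per-channel attenuation coefficient, and B_c the
    backscatter (veiling-light) colour."""
    transmission = np.exp(-beta[None, None, :] * depth[:, :, None])
    return clean * transmission + backscatter[None, None, :] * (1.0 - transmission)

# Red light attenuates fastest in water; backscatter is blue-green.
beta = np.array([0.8, 0.3, 0.2])            # assumed RGB attenuation
backscatter = np.array([0.05, 0.35, 0.45])  # assumed veiling light

clean = np.full((8, 8, 3), 0.9)             # bright terrestrial patch
depth = np.linspace(0.5, 6.0, 64).reshape(8, 8)
degraded = underwater_degrade(clean, depth, beta, backscatter)
# Distant pixels lose red and drift toward the blue-green backscatter,
# the colour cast such a pipeline trains a restoration model to invert.
```

A synthesis pipeline of this kind lets terrestrial datasets with depth maps stand in for scarce paired underwater data; the restoration network then learns to invert the degradation.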
